Spectral Analysis of Linear Operators
Definition: Let $A : V \to V$ be a linear transformation on a vector space $V$. A subspace $W$ of $V$ is called an invariant subspace under $A$ if $A(x) \in W$ for all $x \in W$.
For example, let $A$ be the linear transformation on $\mathbb{R}^2$ given by $A(x,y) = (x+y,\, y)$. Then $W = \{(x,0) \in \mathbb{R}^2 \mid x \in \mathbb{R}\}$ is an invariant subspace under $A$, since $A(x,0) = (x,0) \in W$.
Example: $R(A)$ is an invariant subspace under $A$.
Solution: Let $x \in R(A)$. Then $x = Ay$ for some $y \in V$, and $Ax = A(Ay) = A^2 y \in R(A)$. Thus, $R(A)$ is an invariant subspace under $A$.
Example: Let $M = \text{span}\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$ and $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$. Is $M$ an invariant subspace under $A$?
Solution: Let $x = \begin{bmatrix} x_1 \\ x_1 \end{bmatrix} \in M$. Then $Ax = \begin{bmatrix} x_1 + 2x_1 \\ 2x_1 + x_1 \end{bmatrix} = \begin{bmatrix} 3x_1 \\ 3x_1 \end{bmatrix} = 3x_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \in M$. Thus, $M$ is an invariant subspace under $A$.
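This invariance check is easy to verify numerically. The following sketch uses NumPy (which these notes do not otherwise assume) to confirm that $Ax$ stays in $M$ for an arbitrary $x \in M$:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 1.]])
m = np.array([1., 1.])       # basis vector of M = span{(1, 1)}

x = 5.0 * m                  # an arbitrary element of M
Ax = A @ x                   # = (15, 15) = 3x

# Ax stays in M: the 2x2 "cross product" with m vanishes iff Ax is parallel to m
parallel = np.isclose(Ax[0] * m[1] - Ax[1] * m[0], 0.0)
print(Ax, bool(parallel))    # [15. 15.] True
```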
Example: $N(A)$ is an invariant subspace under $A$.
Solution: Let $x \in N(A)$. Then $Ax = 0 \in N(A)$. Thus, $N(A)$ is an invariant subspace under $A$.
Definition: Powers of a linear transformation $A$ are defined as follows:

$$A^k x = \underbrace{A(A(\cdots(Ax)\cdots))}_{k \text{ times}}$$
By using this definition, polynomials of a linear transformation $A$ can be constructed as linear combinations of powers of $A$:

$$p(A) = \alpha_0 A^n + \alpha_1 A^{n-1} + \cdots + \alpha_{n-1} A + \alpha_n I$$

where $I$ is the identity transformation and $\alpha_0, \alpha_1, \cdots, \alpha_n$ are scalars.
Property: $A\,p(A) = p(A)\,A$, i.e., $A$ commutes with any polynomial of $A$.
Example: Show that $A^2 = 2A + 3I$ for $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$.
Solution: $A^2 = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix} = 2 \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} = 2A + 3I$.
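The identity can also be checked numerically; a short sketch with NumPy:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 1.]])

lhs = A @ A                   # A^2
rhs = 2 * A + 3 * np.eye(2)   # 2A + 3I

print(lhs)                    # [[5. 4.] [4. 5.]]
print(np.allclose(lhs, rhs))  # True
```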
Example: Show that $R(p(A))$ and $N(p(A))$ are invariant subspaces under $A$ for any polynomial $p(A)$.
Solution: Let $x \in R(p(A))$. Then $x = p(A)y$ for some $y \in V$, and $Ax = A\,p(A)\,y = p(A)(Ay) \in R(p(A))$. Thus, $R(p(A))$ is an invariant subspace under $A$.
Let $x \in N(p(A))$. Then $p(A)x = 0$, so $(p(A)A)x = (A\,p(A))x = A\,0 = 0$, i.e., $Ax \in N(p(A))$. Thus, $N(p(A))$ is an invariant subspace under $A$.
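The commutation property $A\,p(A) = p(A)\,A$ that drives both proofs can be spot-checked numerically. In this sketch the matrix and the polynomial are arbitrarily chosen for illustration, not taken from the notes:

```python
import numpy as np

# an arbitrarily chosen (non-symmetric) matrix, for illustration only
A = np.array([[1., 1.],
              [0., 2.]])

def p(M):
    """A sample polynomial: p(M) = M^2 + 2M + I."""
    return M @ M + 2 * M + np.eye(2)

# A commutes with p(A) -- the key step behind both invariance proofs
print(np.allclose(A @ p(A), p(A) @ A))  # True
```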
Definition: Let $A$ denote the matrix representation of a linear transformation $A : V \to V$, with $A$ an $n \times n$ matrix. The eigenvalues $\lambda_i$ of $A$ are the roots of the characteristic polynomial of $A$:

$$\det(sI - A) = 0$$
Definition: Nonzero vectors $e_i \in V$ satisfying $Ae_i = \lambda_i e_i$ are called eigenvectors of $A$ corresponding to the eigenvalues $\lambda_i$.
Example: Find the eigenvalues and eigenvectors of $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$.
Solution: $\det(sI - A) = \det \begin{bmatrix} s-1 & -2 \\ -2 & s-1 \end{bmatrix} = (s-1)^2 - 4 = s^2 - 2s - 3 = (s-3)(s+1) = 0$. Thus, $\lambda_1 = 3$ and $\lambda_2 = -1$.
For $\lambda_1 = 3$: $\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 3 \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \implies \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0 \implies x_1 = x_2 \implies \begin{bmatrix} x_1 \\ x_1 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
For $\lambda_2 = -1$: $\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = -1 \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \implies \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0 \implies x_1 = -x_2 \implies \begin{bmatrix} x_1 \\ -x_1 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ -1 \end{bmatrix}$
Thus, $e_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $e_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ are the eigenvectors of $A$ corresponding to $\lambda_1 = 3$ and $\lambda_2 = -1$, respectively.
Note that $e_1$ and $e_2$ are linearly independent. Also, $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ are orthogonal. Thus,

$$\mathbb{R}^2 = \text{span}\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\} \oplus \text{span}\left\{ \begin{bmatrix} 1 \\ -1 \end{bmatrix} \right\}$$
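These hand computations agree with a numerical eigendecomposition; a sketch using `np.linalg.eig`:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 1.]])

w, V = np.linalg.eig(A)       # eigenvalues in w, eigenvectors in columns of V
print(np.sort(w))             # [-1.  3.]

# A e = lambda e for each eigenpair
for lam, e in zip(w, V.T):
    print(np.allclose(A @ e, lam * e))  # True (twice)

# the eigenvectors are orthogonal, as expected for a symmetric matrix
print(np.isclose(V[:, 0] @ V[:, 1], 0.0))  # True
```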
Theorem: Consider the linear transformation $y = Ax$ with $A$ an $n \times n$ matrix. Suppose that

I. $\mathbb{C}^n = M_1 \oplus M_2 \oplus \cdots \oplus M_k$

II. $M_i$ is an invariant subspace under $A$ for $i = 1, 2, \cdots, k$

Then the transformation $A$ can be represented as a block diagonal matrix:
$$\bar{A} = \begin{bmatrix} \bar{A}_1 & 0 & \cdots & 0 \\ 0 & \bar{A}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \bar{A}_k \end{bmatrix}$$

where $\bar{A} = P^{-1}AP$ with

$$P = \begin{bmatrix} p_1 & p_2 & \cdots & p_k \end{bmatrix}, \qquad p_i = \begin{bmatrix} e_i^1 & e_i^2 & \cdots & e_i^{n_i} \end{bmatrix}$$

Here $n_i$ is the dimension of $M_i$, $e_i^j$ is the $j^{\text{th}}$ basis vector of $M_i$, and each block $\bar{A}_i$ is the $n_i \times n_i$ representation of $A$ restricted to $M_i$.
Example: Let $A = \begin{bmatrix} 1 & 1 & -1 \\ -1 & 3 & -2 \\ 0 & 0 & 1 \end{bmatrix}$, $M_1 = \text{span}\left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right\}$, and $M_2 = \text{span}\left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}$.

1- Is $M_1$ invariant under $A$?
2- Is $M_2$ invariant under $A$?
3- Change the basis in both domain and codomain to $\{ b_1^1, b_1^2, b_2^1 \}$, the basis vectors of $M_1$ and $M_2$.
Solution: 1- Let $x \in M_1$, i.e., $x = \alpha_1 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + \alpha_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$. Then $Ax = \begin{bmatrix} 2\alpha_1 + \alpha_2 \\ 2\alpha_1 + 3\alpha_2 \\ 0 \end{bmatrix} = (2\alpha_1 + \alpha_2) \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + 2\alpha_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \in M_1$. Thus, $M_1$ is an invariant subspace under $A$.
2- Let $x \in M_2$, i.e., $x = \alpha_1 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$. Then $Ax = \begin{bmatrix} 0 \\ \alpha_1 \\ \alpha_1 \end{bmatrix} = \alpha_1 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \in M_2$. Thus, $M_2$ is an invariant subspace under $A$.
3- $P = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$ and $P^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}$, so

$$\bar{A} = P^{-1}AP = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & -1 \\ -1 & 3 & -2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
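The change of basis above can be reproduced numerically; a sketch:

```python
import numpy as np

A = np.array([[ 1., 1., -1.],
              [-1., 3., -2.],
              [ 0., 0.,  1.]])

# columns of P: the basis of M1 followed by the basis of M2
P = np.array([[1., 0., 0.],
              [1., 1., 1.],
              [0., 0., 1.]])

Abar = np.linalg.inv(P) @ A @ P   # block diagonal: 2x2 block for M1, 1x1 for M2
print(np.allclose(Abar, [[2., 1., 0.],
                         [0., 2., 0.],
                         [0., 0., 1.]]))  # True
```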
Example: Let $A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix}$. Find the eigenvalues and eigenvectors of $A$.
Solution: $\det(sI - A) = \det \begin{bmatrix} s-1 & 0 & 0 \\ 1 & s-2 & -1 \\ 0 & 0 & s-3 \end{bmatrix} = (s-1)(s-2)(s-3) = 0$. Thus, $\lambda_1 = 1$, $\lambda_2 = 2$ and $\lambda_3 = 3$.
For $\lambda_1 = 1$: $N(A - I) = N\left( \begin{bmatrix} 0 & 0 & 0 \\ -1 & 1 & 1 \\ 0 & 0 & 2 \end{bmatrix} \right) = \text{span}\left\{ \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \right\}$, so take $e_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$.

For $\lambda_2 = 2$: $N(A - 2I) = N\left( \begin{bmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix} \right) = \text{span}\left\{ \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right\}$, so take $e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$.

For $\lambda_3 = 3$: $N(A - 3I) = N\left( \begin{bmatrix} -2 & 0 & 0 \\ -1 & -1 & 1 \\ 0 & 0 & 0 \end{bmatrix} \right) = \text{span}\left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}$, so take $e_3 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$.
$A$ is diagonalizable since $e_1, e_2, e_3$ are linearly independent. From $P\bar{A} = AP$ with $P = \begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$:

$$\begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 2 & 1 \\ 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix}$$

$$\bar{A} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$
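The diagonalization can be confirmed numerically; a sketch:

```python
import numpy as np

A = np.array([[ 1., 0., 0.],
              [-1., 2., 1.],
              [ 0., 0., 3.]])

# eigenvectors e1, e2, e3 as columns, ordered by lambda = 1, 2, 3
P = np.array([[1., 0., 0.],
              [1., 1., 1.],
              [0., 0., 1.]])

Abar = np.linalg.inv(P) @ A @ P
print(np.allclose(Abar, np.diag([1., 2., 3.])))  # True
```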
Theorem: Let $A$ be an $n \times n$ matrix with distinct eigenvalues $\lambda_1, \lambda_2, \cdots, \lambda_k$, i.e., $\lambda_i \neq \lambda_j$ for $i \neq j$. Then the eigenvectors $e_1, e_2, \cdots, e_k$ corresponding to $\lambda_1, \lambda_2, \cdots, \lambda_k$ form a linearly independent set $\{e_1, e_2, \cdots, e_k\}$. Moreover,
$$\text{span}\{e_i\} = N(A - \lambda_i I)$$

In particular, if $A$ has $n$ distinct eigenvalues ($k = n$), then

$$\text{span}\{e_1, e_2, \cdots, e_n\} = \mathbb{C}^n = N(A - \lambda_1 I) \oplus N(A - \lambda_2 I) \oplus \cdots \oplus N(A - \lambda_n I)$$

and, with $P = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}$,

$$\bar{A} = P^{-1}AP = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
Proof: Can be found in the textbook.
#EE501 - Linear Systems Theory at METU